

Clearview AI


CBP Signs Clearview AI Deal to Use Face Recognition for 'Tactical Targeting'

WIRED

US Border Patrol intelligence units will gain access to a face recognition tool built on billions of images scraped from the internet. United States Customs and Border Protection plans to spend $225,000 for a year of access to Clearview AI, a face recognition tool that compares photos against billions of images scraped from the internet. The deal extends access to Clearview tools to Border Patrol's headquarters intelligence division (INTEL) and the National Targeting Center, units that collect and analyze data as part of what CBP calls a coordinated effort to "disrupt, degrade, and dismantle" people and networks viewed as security threats. The contract states that Clearview provides access to "over 60+ billion publicly available images" and will be used for "tactical targeting" and "strategic counter-network analysis," indicating the service is intended to be embedded in analysts' day-to-day intelligence work rather than reserved for isolated investigations. CBP says its intelligence units draw from a "variety of sources," including commercially available tools and publicly available data, to identify people and map their connections for national security and immigration operations.


A Controversial Facial-Recognition Company Quietly Expands Into Latin America

TIME - Tech

For the past three months, a small encrypted group chat of Latin American officials who investigate online child-exploitation cases has been lighting up with reports of raids, arrests, and rescued minors in half a dozen countries. The successes are the result of a recent trial of a facial-recognition tool given to a group of Latin American law-enforcement officials, investigators, and prosecutors by the American company Clearview AI. During a five-day operation in Ecuador in early March, participants from 10 countries including Argentina, Brazil, Colombia, the Dominican Republic, El Salvador, and Peru were given access to Clearview's technology, which allows them to upload images and run them through a database of billions of public photos scraped from the Internet. "Normally it takes at least several days for a child to be identified, and sometimes there are victims that have not been identified for years," says Guillermo Galarza Abizaid, the vice president in charge of partnerships and law enforcement at the Virginia-based nonprofit International Centre for Missing and Exploited Children (ICMEC), which organized the event. The group used the facial-recognition tool to analyze a total of 2,198 images and 995 videos, hundreds of them from cold cases.


If Clearview AI scanned your face, you may get equity in the company

Engadget

Controversial facial recognition company Clearview AI has agreed to an unusual settlement to a class action lawsuit, The New York Times reports. Rather than paying cash, the company would provide a 23 percent stake to any Americans in its database. Without the settlement, Clearview could go bankrupt, according to court documents. If you live in the US and have ever posted a photo of yourself publicly online, you may be part of the class action. The settlement could amount to at least $50 million, according to court documents, and it still must be approved by a federal judge.


Application of the NIST AI Risk Management Framework to Surveillance Technology

Swaminathan, Nandhini, Danks, David

arXiv.org Artificial Intelligence

This study offers an in-depth analysis of the application and implications of the National Institute of Standards and Technology's AI Risk Management Framework (NIST AI RMF) within the domain of surveillance technologies, particularly facial recognition technology. Given the inherently high-risk and consequential nature of facial recognition systems, our research emphasizes the critical need for a structured approach to risk management in this sector. The paper presents a detailed case study demonstrating the utility of the NIST AI RMF in identifying and mitigating risks that might otherwise remain unnoticed in these technologies. Our primary objective is to develop a comprehensive risk management strategy that advances the practice of responsible AI utilization in feasible, scalable ways. We propose a six-step process tailored to the specific challenges of surveillance technology that aims to produce a more systematic and effective risk management practice. This process emphasizes continual assessment and improvement to facilitate companies in managing AI-related risks more robustly and ensuring ethical and responsible deployment of AI systems. These insights contribute to the evolving discourse on AI governance and risk management, highlighting areas for future refinement and development in frameworks like the NIST AI RMF. Surveillance technologies are increasingly widespread in both public and private spaces, often being developed and deployed with little engagement from relevant stakeholders. Most notably, the individuals subject to the surveillance technology are rarely included in creating that technology. As an illustration of both prominence and controversy, one may consider the AI system developed by Clearview AI Inc. to monitor and record the activities of individuals and groups, including rapid face identification. 
Their system has come under close scrutiny for the ways that the organization scraped images and training data from the Internet; the company is currently under investigation in multiple jurisdictions for scraping billions of images from social media sites without users' consent [1, 2], and other companies like Facebook, Twitter, Venmo, and Google have issued cease and desist letters citing violations of their terms of service [3].


A Facial-Recognition Tour of New York

The New Yorker

Kashmir Hill, the author of the new book "Your Face Belongs to Us," took a walk around midtown the other day, to check out a few businesses that routinely capture visitors' biometric data. She wore a red coat and white boots, and her hair was a faded purple. "Let's see if Macy's is still collecting face-recognition data," she said. Businesses that do so are required by city law to post signs alerting visitors. She'd noticed, earlier, that the store's signs were "very affixed to their walls."


Ukraine's 'Secret Weapon' Against Russia Is a Controversial U.S. Tech Company

TIME - Tech

Leonid Tymchenko spent the first month of Russia's invasion sitting in his dark government office after curfew. Unable to go home, Ukraine's Deputy Minister of Internal Affairs scrolled through Telegram, looking at thousands of videos and images of advancing Russian soldiers. When Tymchenko was offered a chance to test a new facial-recognition tool, he uploaded some of the photos to try it out. He could not believe the results. Every time Tymchenko added a photo of a Russian soldier, the software, made by the American facial-recognition company Clearview AI, seemed to come back with an exact hit, linking to pages that revealed the soldier's name, hometown, and social-media profile.


Privacy in the Age of AI

Communications of the ACM

In January 2020, privacy journalist Kashmir Hill published an article in The New York Times describing Clearview AI--a company that purports to help U.S. law enforcement match photos of unknown people to their online presence through a facial recognition model trained by scraping millions of publicly available face images online. In 2021, police departments in many different U.S. cities were reported to have used Clearview AI to, for example, identify Black Lives Matter protestors. In 2022, a California-based artist found that photos she thought to be in her private medical record were included, without her knowledge or consent, in the LAION training dataset that has been used to train Stable Diffusion and Google Imagen. The artist has a rare medical condition she prefers to keep private and expressed concern about the abuse potential of generative AI technologies having access to her photos. In January 2023, Twitch streamer QTCinderella made an emphatic plea to her followers on Twitter to stop spreading links to an illicit website hosting AI-generated "deep fake" pornography of her and other women influencers.


FBI Agents Are Using Face Recognition Without Proper Training

WIRED

The US Federal Bureau of Investigation (FBI) has done tens of thousands of face recognition searches using software from outside providers in recent years. Yet only 5 percent of the 200 agents with access to the technology have taken the bureau's three-day training course on how to use it, a report from the Government Accountability Office (GAO) this month reveals. The bureau has no policy for face recognition use in place to protect privacy, civil rights, or civil liberties. Lawmakers and others concerned about face recognition have said that adequate training on the technology and how to interpret its output is needed to reduce improper use or errors, although some experts say training can lull law enforcement and the public into thinking face recognition is low risk. Since the false arrest of Robert Williams near Detroit in 2020, multiple instances have surfaced in the US of arrests after a face recognition model wrongly identified a person.


'Are you kidding, carjacking?': The problem with facial recognition in policing

The Guardian

Porcha Woodruff was eight months pregnant when police in Detroit, Michigan came to arrest her on charges of carjacking and robbery. She was getting her two children ready for school when six police officers knocked on her door and presented her with an arrest warrant. She thought it was a prank. "Do you see that I am eight months pregnant?" the lawsuit Woodruff filed against Detroit police reads. She sent her children upstairs to tell her fiance that "Mommy's going to jail". She was detained and questioned for 11 hours and released on a $100,000 bond. She immediately went to the hospital, where she was treated for dehydration. Woodruff later found out that she was the latest victim of false identification by facial recognition. After her image was incorrectly matched to video footage of a woman at the gas station where the carjacking took place, her picture was shown to the victim in a photo lineup. According to the lawsuit, the victim allegedly chose Woodruff's picture as the woman ...


Police are using invasive facial recognition software to put every American in a perpetual lineup

FOX News

Face ID utilizes facial recognition technology to scan your face and verify your identity. When activated, the feature uses the front-facing camera, or selfie cam, to securely authenticate that you are the owner of the iPhone. Think twice before posting that selfie on Facebook; you might be added to a police database. With all the excitement around AI, thanks to ChatGPT, many are cheering for this technology and loving its optimizing powers. Unfortunately, this onion has many layers; the deeper we go, the stinkier it gets.